Clone kafka 18894 #29
Conversation
Walkthrough

Replaces direct ConfigProvider instances with Plugin-wrapped instances, enabling metrics for monitorable providers.
Sequence Diagram(s)

sequenceDiagram
autonumber
actor Caller as Caller
participant AC as AbstractConfig
participant Pl as Plugins
participant PPl as Plugin<ConfigProvider>
participant CT as ConfigTransformer
participant CP as ConfigProvider
participant Mx as Metrics
Caller->>AC: load configs, request provider for "name"
AC->>Pl: newConfigProvider(config, "name", classLoaderUsage, Mx)
Pl->>CP: instantiate provider class
Pl->>CP: configure(...)
Note right of CP: if Monitorable, will expect withPluginMetrics(Mx)
Pl->>PPl: Plugin.wrapInstance(CP, Mx, tags)
Pl-->>AC: return PPl
AC->>CT: new ConfigTransformer(mapOf(name->PPl))
Caller->>CT: transform(props)
CT->>PPl: p = plugin.get()
PPl-->>CT: CP (underlying)
CT->>CP: get(path[, keys])
CP-->>CT: ConfigData
CT-->>Caller: transformed props
Caller->>PPl: close (Utils.closeQuietly)
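The lifecycle in the diagram above (wrap, unwrap via get(), transform, close) can be sketched with a simplified stand-in wrapper. PluginWrapper and MapConfigProvider below are hypothetical illustrations of the pattern, not Kafka's actual internal Plugin class:

```java
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical, simplified stand-in for a Plugin<T>-style wrapper: it holds the
// underlying instance plus tag metadata, and closing it closes the delegate too.
final class PluginWrapper<T> implements AutoCloseable, Supplier<T> {
    private final T instance;
    private final Map<String, String> tags;
    private boolean closed = false;

    private PluginWrapper(T instance, Map<String, String> tags) {
        this.instance = instance;
        this.tags = tags;
    }

    static <T> PluginWrapper<T> wrapInstance(T instance, Map<String, String> tags) {
        return new PluginWrapper<>(instance, tags);
    }

    @Override
    public T get() { return instance; }          // callers unwrap to reach the provider

    Map<String, String> tags() { return tags; }

    @Override
    public void close() throws Exception {
        if (closed) return;
        closed = true;
        if (instance instanceof AutoCloseable) { // close the underlying provider as well
            ((AutoCloseable) instance).close();
        }
    }
}

public class PluginLifecycleDemo {
    // toy "config provider": resolves keys from a fixed rule, tracks closure
    static final class MapConfigProvider implements AutoCloseable {
        boolean closed = false;
        String get(String key) { return "value-of-" + key; }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) throws Exception {
        MapConfigProvider provider = new MapConfigProvider();
        PluginWrapper<MapConfigProvider> plugin = PluginWrapper.wrapInstance(
                provider, Map.of("config", "config.providers", "provider", "file"));

        System.out.println(plugin.get().get("password")); // transform step unwraps and queries
        plugin.close();                                   // closing the wrapper closes the provider
        System.out.println("provider closed: " + provider.closed);
    }
}
```

The key design point mirrored here is that callers interact with the wrapper only at the boundaries (get() to use, close() to release) while the provider itself stays unaware of the wrapping.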
sequenceDiagram
autonumber
participant Test as TestHarness
participant Plugin as Plugin.wrapInstance
participant Prov as MonitorableConfigProvider
participant M as Metrics
Test->>Prov: new MonitorableConfigProvider()
Test->>Plugin: wrapInstance(Prov, M, tags)
Plugin-->>Test: Plugin<ConfigProvider>
Test->>M: assert metric exists with tags (config=config.providers, provider=name, class=Prov)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Summary of Changes

This pull request introduces a significant architectural enhancement by wrapping ConfigProvider instances in Plugin objects, enabling metrics for monitorable providers.
Clone kafka 18894

TL;DR: Backports RemoteLogManager implementation to an older Kafka version with compatibility adjustments for Java 8.

Refacto PR Summary

This PR adds the RemoteLogManager implementation and related classes for tiered storage support while maintaining Java 8 compatibility.

Change Highlights
Sequence Diagram

sequenceDiagram
participant Client as Client
participant Broker as Kafka Broker
participant RLM as RemoteLogManager
participant RLMM as RemoteLogMetadataManager
participant RSM as RemoteStorageManager
participant Storage as Remote Storage
Client->>Broker: Fetch Request
Broker->>RLM: Read Remote Log
RLM->>RLMM: Fetch Metadata
RLMM-->>RLM: Remote Log Segment Metadata
RLM->>RSM: Fetch Log Segment
RSM->>Storage: Read Data
Storage-->>RSM: Log Data
RSM-->>RLM: Log Segment
RLM-->>Broker: FetchDataInfo
Broker-->>Client: Fetch Response
Testing Guide
Code Review
This pull request refactors the handling of ConfigProvider instances to wrap them in a Plugin object, which enables metrics for monitorable providers. The changes are consistently applied throughout the codebase, including updates to AbstractConfig, ConfigTransformer, and various components in Kafka Connect. The implementation correctly passes the Metrics registry where available (like in Worker) and handles its absence gracefully in other contexts. New tests have been added to validate the metrics integration and the lifecycle of config providers. The code quality is high, and the changes are a valuable addition for monitoring. I have no further comments.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java (1)
598-640: Close provider on configure failure; trim provider names; improve plugin tagging
- If configure() throws, the raw provider instance may leak. Close it before rethrowing.
- Trim entries from config.providers to tolerate whitespace.
- Use a provider-specific key for Plugin metrics/log tags (helps diagnostics, even if metrics is null here).
Apply this diff:
-    private Map<String, Plugin<ConfigProvider>> instantiateConfigProviders(
+    private Map<String, Plugin<ConfigProvider>> instantiateConfigProviders(
             Map<String, String> indirectConfigs,
             Map<String, ?> providerConfigProperties,
             Predicate<String> classNameFilter
     ) {
         final String configProviders = indirectConfigs.get(CONFIG_PROVIDERS_CONFIG);
@@
-        for (String provider : configProviders.split(",")) {
+        for (String providerRaw : configProviders.split(",")) {
+            final String provider = providerRaw.trim();
+            if (provider.isEmpty()) continue;
             String providerClass = providerClassProperty(provider);
             if (indirectConfigs.containsKey(providerClass)) {
                 String providerClassName = indirectConfigs.get(providerClass);
                 if (classNameFilter.test(providerClassName)) {
                     providerMap.put(provider, providerClassName);
@@
-        Map<String, Plugin<ConfigProvider>> configProviderPluginInstances = new HashMap<>();
+        Map<String, Plugin<ConfigProvider>> configProviderPluginInstances = new HashMap<>();
         for (Map.Entry<String, String> entry : providerMap.entrySet()) {
             try {
                 String prefix = CONFIG_PROVIDERS_CONFIG + "." + entry.getKey() + CONFIG_PROVIDERS_PARAM;
                 Map<String, ?> configProperties = configProviderProperties(prefix, providerConfigProperties);
-                ConfigProvider provider = Utils.newInstance(entry.getValue(), ConfigProvider.class);
-                provider.configure(configProperties);
-                Plugin<ConfigProvider> providerPlugin = Plugin.wrapInstance(provider, null, CONFIG_PROVIDERS_CONFIG);
+                ConfigProvider provider = Utils.newInstance(entry.getValue(), ConfigProvider.class);
+                try {
+                    provider.configure(configProperties);
+                } catch (Throwable t) {
+                    maybeClose(provider, "AutoCloseable config provider constructed during failed instantiation of '" + entry.getKey() + "'");
+                    throw t;
+                }
+                // Tag with specific provider for clearer metrics/logging even if metrics are null here
+                Plugin<ConfigProvider> providerPlugin = Plugin.wrapInstance(provider, null, CONFIG_PROVIDERS_CONFIG + "." + entry.getKey());
                 configProviderPluginInstances.put(entry.getKey(), providerPlugin);
             } catch (ClassNotFoundException e) {
                 log.error("Could not load config provider class {}", entry.getValue(), e);
                 throw new ConfigException(providerClassProperty(entry.getKey()), entry.getValue(),
                         "Could not load config provider class or one of its dependencies");
             }
         }
         return configProviderPluginInstances;
     }

Consider LinkedHashMap to preserve declaration order if any downstream relies on it.
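The close-on-configure-failure pattern recommended above can be shown in isolation. FailingProvider and its methods are hypothetical stand-ins used only to demonstrate the cleanup-then-rethrow shape, not Kafka types:

```java
import java.util.Map;

public class ConfigureFailureDemo {
    // toy provider whose configure() always fails, so the cleanup path is exercised
    static final class FailingProvider implements AutoCloseable {
        boolean closed = false;
        void configure(Map<String, ?> configs) {
            throw new IllegalArgumentException("bad config");
        }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FailingProvider provider = new FailingProvider();
        try {
            provider.configure(Map.of());
        } catch (RuntimeException e) {
            provider.close(); // the fix: release the instance before propagating
            System.out.println("configure failed: " + e.getMessage());
        }
        System.out.println("closed on failure: " + provider.closed);
    }
}
```

Without the close() in the catch block, a provider that opened resources in its constructor would leak whenever configuration is invalid.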
🧹 Nitpick comments (11)
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/Plugins.java (3)
641-649: Wrap before configure and close on failure to avoid leaks

If plugin.configure(...) throws, the instance is not wrapped and never closed. Wrap first and close the wrapper on failure so both the plugin and its metrics (if any) are cleaned up.

-            ConfigProvider plugin = newVersionedPlugin(config, classPropertyName, null, ConfigProvider.class, classLoaderUsage, scanResult.configProviders());
+            ConfigProvider plugin = newVersionedPlugin(config, classPropertyName, null, ConfigProvider.class, classLoaderUsage, scanResult.configProviders());
+            Plugin<ConfigProvider> wrapped = Plugin.wrapInstance(
+                    plugin, metrics, WorkerConfig.CONFIG_PROVIDERS_CONFIG, Map.of("provider", providerName));
@@
-            try (LoaderSwap loaderSwap = safeLoaderSwapper().apply(plugin.getClass().getClassLoader())) {
-                plugin.configure(configProviderConfig);
-            }
-            return Plugin.wrapInstance(plugin, metrics, WorkerConfig.CONFIG_PROVIDERS_CONFIG, Map.of("provider", providerName));
+            try (LoaderSwap loaderSwap = safeLoaderSwapper().apply(plugin.getClass().getClassLoader())) {
+                wrapped.get().configure(configProviderConfig);
+            } catch (Throwable t) {
+                try { wrapped.close(); } catch (Exception e) {
+                    log.warn("Failed to close config provider after configure failure for provider '{}'", providerName, e);
+                }
+                throw t;
+            }
+            return wrapped;
649-649: Use a more specific metrics “config” tag key

For better disambiguation across multiple providers, prefer providerPrefix (e.g., config.providers.file) as the metrics key instead of the generic config.providers.

-            return Plugin.wrapInstance(plugin, metrics, WorkerConfig.CONFIG_PROVIDERS_CONFIG, Map.of("provider", providerName));
+            return Plugin.wrapInstance(plugin, metrics, providerPrefix, Map.of("provider", providerName));

If you apply the “wrap-before-configure” change above, apply the same key when creating wrapped.
633-633: Trim providerName to avoid subtle misconfig from whitespace

A minor hardening to tolerate “file, secret ” style lists.

-            String providerPrefix = WorkerConfig.CONFIG_PROVIDERS_CONFIG + "." + providerName;
+            String normalizedProvider = providerName.trim();
+            String providerPrefix = WorkerConfig.CONFIG_PROVIDERS_CONFIG + "." + normalizedProvider;

Also use normalizedProvider in the metrics extra tag:

-            Map.of("provider", providerName)
+            Map.of("provider", normalizedProvider)

clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java (1)
49-50: Wrapping provider via Plugin is correct; consider a metrics variant. Using null Metrics keeps this suite focused on transform behavior. Optionally add a dedicated test with a Monitorable provider and a real Metrics instance to assert metrics wiring here too, or rely on PluginsTest for that coverage.
connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginsTest.java (1)

404-413: Strengthen metrics test by asserting default metric tags.

The test guarantees withPluginMetrics is called after configure. To fully validate the documented contract, also assert tags include config=config.providers, class=, and provider=.

Apply this diff to make provider name reusable and assert tags inside the monitorable provider:

@@ public class PluginsTest {
+    private static final String MONITORABLE_PROVIDER_NAME = "monitorable";
@@
-        String providerName = "monitorable";
-        String providerPrefix = WorkerConfig.CONFIG_PROVIDERS_CONFIG + "." + providerName;
+        String providerName = MONITORABLE_PROVIDER_NAME;
+        String providerPrefix = WorkerConfig.CONFIG_PROVIDERS_CONFIG + "." + providerName;
@@
         Plugin<ConfigProvider> plugin = plugins.newConfigProvider(config, providerName, ClassLoaderUsage.PLUGINS, new Metrics());
         assertInstanceOf(CustomMonitorableConfigProvider.class, plugin.get());
     }

And in the inner class:

@@
-    public static class CustomMonitorableConfigProvider extends MonitorableConfigProvider {
+    public static class CustomMonitorableConfigProvider extends MonitorableConfigProvider {
+        @Override
+        public void withPluginMetrics(PluginMetrics metrics) {
+            assertTrue(configured);
+            // Assert default tags
+            var metricName = metrics.metricName("dummy", "dummy", java.util.Map.of());
+            var tags = metricName.tags();
+            assertEquals("config.providers", tags.get("config"));
+            assertEquals(CustomMonitorableConfigProvider.class.getSimpleName(), tags.get("class"));
+            assertEquals(MONITORABLE_PROVIDER_NAME, tags.get("provider"));
+        }
-        @Override
-        public void withPluginMetrics(PluginMetrics metrics) {
-            assertTrue(configured);
-        }
     }

clients/src/main/java/org/apache/kafka/common/config/provider/ConfigProvider.java (1)
32-35: Clarify metrics tag contract to match implementation.

Proposed Javadoc tweak in ConfigProvider.java:

- * <p>Implement {@link org.apache.kafka.common.metrics.Monitorable} to enable the config provider to register metrics.
- * The following tags are automatically added to all metrics registered: <code>config</code> set to
- * <code>config.providers</code>, <code>class</code> set to the ConfigProvider class name,
- * and <code>provider</code> set to the provider name.
+ * <p>If the provider implements {@link org.apache.kafka.common.metrics.Monitorable} and registers metrics,
+ * Kafka will invoke {@code withPluginMetrics(...)}. Those metrics include default tags:
+ * <code>config</code>=<code>config.providers</code> and <code>class</code>=the provider’s simple class name.
+ * An additional <code>provider</code> tag (with the configured provider name) is only added when
+ * the provider is created via {@code Plugins#newConfigProvider(...)}.

connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java (1)
275-280: Optional: wire Metrics to providers if available. Passing null disables plugin metrics for monitorable providers. If a Metrics instance is accessible here, consider supplying it.
clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java (1)
65-68: Update Javadoc to reflect Plugin-wrapped providers.

Adjust parameter description to avoid confusion.

- * @param configProviderPlugins a Map of provider names and {@link ConfigProvider} instances.
+ * @param configProviderPlugins a Map of provider names to Plugin‑wrapped {@link ConfigProvider} instances.
55-57: configure flag set is fine; consider asserting in tests. Optionally verify configured == true where used.
connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerTest.java (2)
2904-2927: Solid coverage for monitorable providers; add a tiny assertion and optional cleanup.

- Consider asserting both provider names were instantiated via newConfigProvider for clarity.
- Optionally call worker.stop() to release metrics resources.

@@
-        assertMetrics(metrics,
+        assertMetrics(metrics,
             1,
             expectedTags(WorkerConfig.CONFIG_PROVIDERS_CONFIG, MonitorableConfigProvider.class.getSimpleName(), Map.of("provider", "monitorable")));
         assertMetrics(metrics,
             1,
             expectedTags(WorkerConfig.CONFIG_PROVIDERS_CONFIG, MonitorableConfigProvider.class.getSimpleName(), Map.of("provider", "monitorable2")));
+        verify(plugins, times(1)).newConfigProvider(any(AbstractConfig.class), eq("monitorable"), any(ClassLoaderUsage.class), any(Metrics.class));
+        verify(plugins, times(1)).newConfigProvider(any(AbstractConfig.class), eq("monitorable2"), any(ClassLoaderUsage.class), any(Metrics.class));
+        worker.stop();
2937-2951: Metric assertion works; make it resilient to future default tags.

If MetricConfig ever adds default tags, strict map equality could cause false negatives. Prefer containment.

-            if (expectedTags.equals(tags)) {
+            if (tags.entrySet().containsAll(expectedTags.entrySet())) {
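The difference between strict equality and containment for tag assertions is easy to demonstrate with plain maps; no Kafka types are needed for the comparison itself:

```java
import java.util.Map;

// Shows why containment beats strict equality for tag assertions: if the metrics
// layer later adds a default tag, equals() breaks but containsAll() still passes.
public class TagAssertionDemo {
    static boolean matchesStrict(Map<String, String> expected, Map<String, String> actual) {
        return expected.equals(actual);
    }

    static boolean matchesContainment(Map<String, String> expected, Map<String, String> actual) {
        return actual.entrySet().containsAll(expected.entrySet());
    }

    public static void main(String[] args) {
        Map<String, String> expected = Map.of(
                "config", "config.providers",
                "provider", "monitorable");
        // simulate a hypothetical future default tag added by the metrics registry
        Map<String, String> actual = Map.of(
                "config", "config.providers",
                "provider", "monitorable",
                "extra-default", "x");

        System.out.println("strict: " + matchesStrict(expected, actual));
        System.out.println("containment: " + matchesContainment(expected, actual));
    }
}
```

The "extra-default" tag is an invented example of a tag the framework might add later; the point is that containment keeps the test's intent (the expected tags are present) without pinning the full tag set.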
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java (4 hunks)
- clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java (3 hunks)
- clients/src/main/java/org/apache/kafka/common/config/provider/ConfigProvider.java (1 hunk)
- clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java (5 hunks)
- clients/src/test/java/org/apache/kafka/common/config/provider/MonitorableConfigProvider.java (1 hunk)
- connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java (2 hunks)
- connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java (1 hunk)
- connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java (3 hunks)
- connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/Plugins.java (3 hunks)
- connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerConfigTransformerTest.java (5 hunks)
- connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerTest.java (5 hunks)
- connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginsTest.java (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/Plugins.java (2)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)
- connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java (1): WorkerConfig (58-463)

connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginsTest.java (2)
- clients/src/test/java/org/apache/kafka/common/config/provider/MonitorableConfigProvider.java (1): MonitorableConfigProvider (29-58)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)

clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java (2)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)
- clients/src/main/java/org/apache/kafka/common/utils/Utils.java (1): Utils (93-1700)

connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java (2)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)
- connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/Plugins.java (1): Plugins (59-696)

clients/src/test/java/org/apache/kafka/common/config/provider/MonitorableConfigProvider.java (1)
- core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala (1): withPluginMetrics (6299-6301)

connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerConfigTransformerTest.java (1)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)

connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java (2)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)
- clients/src/main/java/org/apache/kafka/common/utils/Utils.java (1): Utils (93-1700)

clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java (1)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)

clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java (1)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)

connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerTest.java (4)
- core/src/test/scala/integration/kafka/server/DynamicBrokerReconfigurationTest.scala (1): MockFileConfigProvider (1636-1641)
- clients/src/test/java/org/apache/kafka/common/config/provider/MonitorableConfigProvider.java (1): MonitorableConfigProvider (29-58)
- clients/src/main/java/org/apache/kafka/common/internals/Plugin.java (1): Plugin (33-92)
- clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java (1): Metrics (70-696)
🔇 Additional comments (21)
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/Plugins.java (1)
22-24: Imports LGTM. New imports for Plugin and Metrics are necessary for wrapping config providers with metrics instrumentation.
clients/src/test/java/org/apache/kafka/common/config/ConfigTransformerTest.java (2)
20-21: Import addition is appropriate. Required for Plugin.wrapInstance usage in setup.

54-59: Map.of migration looks good. The switch to Map.of for inputs improves brevity with no behavior change; assertions remain valid.
Also applies to: 63-68, 72-75, 79-82, 86-89, 93-98, 102-105
connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerConfigTransformerTest.java (4)
21-22: Import addition is appropriate. Needed for Plugin-wrapped providers.

69-70: Supplying Plugin-wrapped provider is correct for the new API. Null Metrics is fine for these tests; Worker interactions and restart scheduling are unaffected.

75-76: Map.of migration for transform inputs is fine. Concise, immutable inputs; no change in semantics.

Also applies to: 100-101, 115-116, 122-123

150-158: Use of Map.of in ConfigData is appropriate. Immutable payloads for test providers reduce accidental mutation. LGTM.
connect/runtime/src/test/java/org/apache/kafka/connect/runtime/isolation/PluginsTest.java (2)
25-29: New imports are appropriate. Required for Plugin-wrapped providers and metrics assertions.

381-402: Updated newConfigProvider test correctly adapts to Plugin wrapper. Using plugin.get() for assertions maintains previous intent and preserves classloader checks.
connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java (1)
26-26: Import looks correct for Plugin usage.

clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java (2)

60-69: Migration to Plugin-wrapped providers looks good. Constructor and field types align with new Plugin model.

Please ensure all callers now pass Map<String, Plugin<ConfigProvider>>.

98-105: Provider access via plugin.get() is correct; null-safe check present. No further action; relies on providers returning non-null ConfigData.

Confirm all test providers return non-null ConfigData to avoid NPEs here.
clients/src/test/java/org/apache/kafka/common/config/provider/MonitorableConfigProvider.java (1)
35-38: Metrics wiring is fine for test purposes. MetricName construction and Measurable registration look OK.
clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java (1)
23-23: Import addition is correct. Importing Plugin aligns with new provider-wrapping flow.
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java (1)
212-223: Cleanup of provider plugins on shutdown is correctly handled. WorkerConfigTransformer.close() iterates over all configProviderPlugins and calls Utils.closeQuietly on each, and Worker.stop() invokes workerConfigTransformer.close() after stopping metrics, ensuring all provider plugins are closed.

connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerTest.java (3)

40-47: Imports for Plugin-wrapped providers and metrics — OK. These are the right types to support Plugin and monitorable providers.

Also applies to: 98-98, 117-117

2929-2935: Tag construction mirrors Plugin tag semantics — OK. Using LinkedHashMap is fine; equality checks are order-insensitive anyway.

382-389: API consistency verified. Plugins.newConfigProvider(AbstractConfig, String, ClassLoaderUsage, Metrics) is implemented in Plugins.java, invoked with four arguments in Worker.java, and correctly stubbed in WorkerTest. No changes needed.

connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfigTransformer.java (3)

23-23: Import for Plugin — OK. Required for the new wrapped provider map.

101-101: Close path now shuts down plugins and their metrics — good. Closing the wrapper ensures both provider and plugin metrics are released.

46-52: ConfigTransformer constructor supports Map<String, Plugin<ConfigProvider>>. Confirmed that ConfigTransformer defines a constructor taking Map<String, Plugin<ConfigProvider>>, matching the new usage — no compilation issues.
+        Map<String, Plugin<ConfigProvider>> providerPlugins = instantiateConfigProviders(providerConfigString, configProperties, classNameFilter);
+
-        if (!providers.isEmpty()) {
-            ConfigTransformer configTransformer = new ConfigTransformer(providers);
+        if (!providerPlugins.isEmpty()) {
+            ConfigTransformer configTransformer = new ConfigTransformer(providerPlugins);
             ConfigTransformerResult result = configTransformer.transform(indirectVariables);
             if (!result.data().isEmpty()) {
                 resolvedOriginals.putAll(result.data());
             }
         }
-        providers.values().forEach(x -> Utils.closeQuietly(x, "config provider"));
+        providerPlugins.values().forEach(x -> Utils.closeQuietly(x, "config provider plugin"));
🧩 Analysis chain
Always close provider plugins on failure (use finally) to avoid leaks
If ConfigTransformer construction or transform throws, provider plugins aren’t closed. Wrap with try/finally and include provider name in close logs.
Apply this diff:
- Map<String, Plugin<ConfigProvider>> providerPlugins = instantiateConfigProviders(providerConfigString, configProperties, classNameFilter);
-
- if (!providerPlugins.isEmpty()) {
- ConfigTransformer configTransformer = new ConfigTransformer(providerPlugins);
- ConfigTransformerResult result = configTransformer.transform(indirectVariables);
- if (!result.data().isEmpty()) {
- resolvedOriginals.putAll(result.data());
- }
- }
- providerPlugins.values().forEach(x -> Utils.closeQuietly(x, "config provider plugin"));
+ Map<String, Plugin<ConfigProvider>> providerPlugins = Collections.emptyMap();
+ try {
+ providerPlugins = instantiateConfigProviders(providerConfigString, configProperties, classNameFilter);
+ if (!providerPlugins.isEmpty()) {
+ ConfigTransformer configTransformer = new ConfigTransformer(providerPlugins);
+ ConfigTransformerResult result = configTransformer.transform(indirectVariables);
+ if (!result.data().isEmpty()) {
+ resolvedOriginals.putAll(result.data());
+ }
+ }
+ } finally {
+ for (Map.Entry<String, Plugin<ConfigProvider>> e : providerPlugins.entrySet()) {
+ Utils.closeQuietly(e.getValue(), "config provider plugin '" + e.getKey() + "'");
+ }
+ }If ConfigTransformer implements AutoCloseable, we should also close it. Check with:
🏁 Script executed:
#!/bin/bash
rg -nP 'class\s+ConfigTransformer\b.*implements\s+AutoCloseable' -g '!**/test/**'
rg -nP '\bvoid\s+close\s*\(' clients/ connect/

Length of output: 42678
🏁 Script executed:
find . -type f -name ConfigTransformer.java

Length of output: 77
🏁 Script executed:
#!/bin/bash
sed -n '1,200p' clients/src/main/java/org/apache/kafka/common/config/ConfigTransformer.java

Length of output: 8388
Always close provider plugins on failure (use try/finally)
Wrap instantiation and transformation in a try/finally to guarantee plugins are closed on any exception—include the provider name in the close log. ConfigTransformer does not implement AutoCloseable, so it need not be closed.
- Map<String, Plugin<ConfigProvider>> providerPlugins = instantiateConfigProviders(providerConfigString, configProperties, classNameFilter);
-
- if (!providerPlugins.isEmpty()) {
- ConfigTransformer configTransformer = new ConfigTransformer(providerPlugins);
- ConfigTransformerResult result = configTransformer.transform(indirectVariables);
- if (!result.data().isEmpty()) {
- resolvedOriginals.putAll(result.data());
- }
- }
- providerPlugins.values().forEach(x -> Utils.closeQuietly(x, "config provider plugin"));
+ Map<String, Plugin<ConfigProvider>> providerPlugins = Collections.emptyMap();
+ try {
+ providerPlugins = instantiateConfigProviders(providerConfigString, configProperties, classNameFilter);
+ if (!providerPlugins.isEmpty()) {
+ ConfigTransformer configTransformer = new ConfigTransformer(providerPlugins);
+ ConfigTransformerResult result = configTransformer.transform(indirectVariables);
+ if (!result.data().isEmpty()) {
+ resolvedOriginals.putAll(result.data());
+ }
+ }
+ } finally {
+ for (Map.Entry<String, Plugin<ConfigProvider>> e : providerPlugins.entrySet()) {
+ Utils.closeQuietly(e.getValue(), "config provider plugin '" + e.getKey() + "'");
+ }
    public ConfigData get(String path) {
        return null;
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        return null;
    }
Return non-null ConfigData from get methods to avoid NPEs.
ConfigTransformer expects a non-null ConfigData; current null return can crash tests using this provider.
@Override
public ConfigData get(String path) {
- return null;
+ return new ConfigData(Map.of());
}
@Override
public ConfigData get(String path, Set<String> keys) {
- return null;
+ return new ConfigData(Map.of());
 }
🤖 Prompt for AI Agents
In
clients/src/test/java/org/apache/kafka/common/config/provider/MonitorableConfigProvider.java
around lines 41 to 48, the two get methods currently return null causing NPEs;
replace those null returns with non-null ConfigData instances (e.g., an empty
Map for data and a null/empty version) and in the keyed get(String path,
Set<String> keys) return a ConfigData whose data contains only the requested
keys (or an empty map if none), ensuring both methods always return a valid
ConfigData object.
```diff
 Map<String, Plugin<ConfigProvider>> providerPlugins = new HashMap<>();
 for (String name : providerNames) {
-    ConfigProvider configProvider = plugins.newConfigProvider(
+    Plugin<ConfigProvider> configProviderPlugin = plugins.newConfigProvider(
         this,
-        CONFIG_PROVIDERS_CONFIG + "." + name,
-        Plugins.ClassLoaderUsage.PLUGINS
+        name,
+        Plugins.ClassLoaderUsage.PLUGINS,
+        null
     );
-    providers.put(name, configProvider);
+    providerPlugins.put(name, configProviderPlugin);
 }
-ConfigTransformer transformer = new ConfigTransformer(providers);
+ConfigTransformer transformer = new ConfigTransformer(providerPlugins);
 Map<String, String> transformed = transformer.transform(props).data();
-providers.values().forEach(x -> Utils.closeQuietly(x, "config provider"));
+providerPlugins.values().forEach(x -> Utils.closeQuietly(x, "config provider plugin"));
 return transformed;
```
Close provider plugins in a finally block and skip null plugins.
Prevents resource leaks if transform throws and avoids null values in the map.
```diff
-        Map<String, Plugin<ConfigProvider>> providerPlugins = new HashMap<>();
-        for (String name : providerNames) {
-            Plugin<ConfigProvider> configProviderPlugin = plugins.newConfigProvider(
-                this,
-                name,
-                Plugins.ClassLoaderUsage.PLUGINS,
-                null
-            );
-            providerPlugins.put(name, configProviderPlugin);
-        }
-        ConfigTransformer transformer = new ConfigTransformer(providerPlugins);
-        Map<String, String> transformed = transformer.transform(props).data();
-        providerPlugins.values().forEach(x -> Utils.closeQuietly(x, "config provider plugin"));
-        return transformed;
+        Map<String, Plugin<ConfigProvider>> providerPlugins = new HashMap<>();
+        try {
+            for (String name : providerNames) {
+                Plugin<ConfigProvider> configProviderPlugin = plugins.newConfigProvider(
+                    this,
+                    name,
+                    Plugins.ClassLoaderUsage.PLUGINS,
+                    null
+                );
+                if (configProviderPlugin != null) {
+                    providerPlugins.put(name, configProviderPlugin);
+                }
+            }
+            ConfigTransformer transformer = new ConfigTransformer(providerPlugins);
+            return transformer.transform(props).data();
+        } finally {
+            providerPlugins.values().forEach(x -> Utils.closeQuietly(x, "config provider plugin"));
+        }
```
🤖 Prompt for AI Agents
In
connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java
around lines 273 to 286, the provider plugins are added to the map and closed
after transform, but if transformer.transform(props) throws you leak resources
and you may insert null plugins into the map; wrap plugin creation/transform
usage in a try/finally where you close all created plugins in the finally block,
and when constructing the map skip adding null Plugin instances (i.e., only put
non-null plugins), and when closing iterate and skip nulls to avoid NPEs.
```diff
     }

-    public ConfigProvider newConfigProvider(AbstractConfig config, String providerPrefix, ClassLoaderUsage classLoaderUsage) {
+    public Plugin<ConfigProvider> newConfigProvider(AbstractConfig config, String providerName, ClassLoaderUsage classLoaderUsage, Metrics metrics) {
```
🧩 Analysis chain
API change: verify all call sites and document behavior
Method now returns Plugin<ConfigProvider> and requires Metrics. Ensure all usages are updated and that release notes call out this source/binary change.
Run to find call sites and check that metrics are threaded through:
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# All invocations of newConfigProvider
rg -nP --type=java -C2 '\bnewConfigProvider\s*\('
# Quick sanity: confirm files calling it also reference Metrics
rg -nP --type=java -C2 '\bnewConfigProvider\s*\(' \
  | sed -n 's/^\([^:]*\):.*/\1/p' | sort -u \
  | xargs -I{} rg -nP --type=java -H '\bMetrics\b|\bmetrics\b' {}
```

Length of output: 17259

🏁 Script executed:

```bash
#!/usr/bin/env bash
set -euo pipefail
# List all non-test Java files invoking newConfigProvider
files=$(rg -lP --type=java 'newConfigProvider\s*\(' | grep -v '/src/test/' || true)
for file in $files; do
  echo "=== $file ==="
  rg -nP --type=java -C3 'newConfigProvider' "$file"
  echo
done
```

Length of output: 1541

🏁 Script executed:

```bash
#!/usr/bin/env bash
set -euo pipefail
echo "MirrorMakerConfig.java (lines 270–285):"
sed -n '270,285p' connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java
echo
echo "Worker.java (lines 210–225):"
sed -n '210,225p' connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
```

Length of output: 1669
Pass a real Metrics to newConfigProvider in MirrorMakerConfig

- In `connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java` (around line 275), you're invoking `plugins.newConfigProvider(this, name, Plugins.ClassLoaderUsage.PLUGINS, null)`; replace the `null` with a valid `Metrics` instance (e.g. create or inject a `ConnectMetrics.metrics()` object) to avoid NPEs and enable metrics collection.
- Document in the release notes that `newConfigProvider` now requires a `Metrics` argument, as this is a breaking API change.
🤖 Prompt for AI Agents
In
connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/Plugins.java
around line 632, the newConfigProvider signature now requires a non-null Metrics
parameter; update callers (notably
connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java
around line ~275) to pass a real Metrics instance instead of null (e.g., obtain
or inject a ConnectMetrics/ConnectMetrics.metrics() instance or create a Metrics
from the current ConnectMetrics context), add necessary imports and wiring to
ensure the Metrics object is available where MirrorMakerConfig constructs the
provider, and update the release notes to state that newConfigProvider now
requires a Metrics argument (breaking API change).
/refacto-bot

Refacto is reviewing this PR. Please wait for the review comments to be posted.

Multi-Domain Review: Connect Compatibility

👍 Well Done
📌 Files Processed
📝 Additional Comments
```java
 */
public class MemoryConfigBackingStore implements ConfigBackingStore {

    private static final Logger log = LoggerFactory.getLogger(MemoryConfigBackingStore.class);
```
Unused Logger Field
The logger is defined but never used in the class. This suggests incomplete implementation or forgotten code, and it leaves the class without log output when debugging is needed.
Standards
- Clean Code - Remove unused variables
```diff
 ConfigInfos result = herder.validateConnectorConfig(config, s -> null, false);

-assertEquals(ConnectorType.SOURCE, herder.connectorType(config));
+assertEquals(herder.connectorType(config), ConnectorType.SOURCE);
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
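To see why the order matters, here is a minimal self-contained sketch, using a toy assertion helper rather than JUnit itself, of how a reversed call produces a misleading failure message:

```java
// Toy stand-in for JUnit's assertEquals, just to show how the failure
// message is built: the first argument is always reported as "expected".
public class AssertOrderDemo {
    static String failureMessage(Object expected, Object actual) {
        if (expected.equals(actual)) {
            return null; // assertion passes, no message
        }
        return "expected: <" + expected + "> but was: <" + actual + ">";
    }

    public static void main(String[] args) {
        int errorRetryTimeout = 99; // pretend the config under test returned 99 instead of the expected 100

        // Correct order: the message blames the actual value, as intended.
        System.out.println(failureMessage(100, errorRetryTimeout));
        // prints: expected: <100> but was: <99>

        // Reversed order: the message now claims 100 was the *actual* value,
        // sending whoever reads the test failure down the wrong path.
        System.out.println(failureMessage(errorRetryTimeout, 100));
        // prints: expected: <99> but was: <100>
    }
}
```

The second message is exactly backwards, which is why the expected-first convention is worth keeping even though both orders compile and pass when the values match.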
```diff
 ConfigInfos result = herder.validateConnectorConfig(config, s -> null, false);
-assertEquals(ConnectorType.SOURCE, herder.connectorType(config));
+assertEquals(herder.connectorType(config), ConnectorType.SOURCE);
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```diff
 ConfigInfos result = herder.validateConnectorConfig(config, s -> null, false);
-assertEquals(ConnectorType.SOURCE, herder.connectorType(config));
+assertEquals(herder.connectorType(config), ConnectorType.SOURCE);
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```java
assertEquals(keyConverter.fromConnectData(topic, headers, KEY_SCHEMA, KEY), SERIALIZED_KEY);
assertEquals(valueConverter.fromConnectData(topic, headers, RECORD_SCHEMA, RECORD), SERIALIZED_RECORD);
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```java
assertEquals(configuration.errorRetryTimeout(), ERRORS_RETRY_TIMEOUT_DEFAULT);
assertEquals(configuration.errorMaxDelayInMillis(), ERRORS_RETRY_MAX_DELAY_DEFAULT);
assertEquals(configuration.errorToleranceType(), ERRORS_TOLERANCE_DEFAULT);
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```diff
 assertEquals(configuration.errorRetryTimeout(), 100);

 configuration = config(singletonMap(ERRORS_RETRY_MAX_DELAY_CONFIG, "100"));
-assertEquals(100, configuration.errorMaxDelayInMillis());
+assertEquals(configuration.errorMaxDelayInMillis(), 100);

 configuration = config(singletonMap(ERRORS_TOLERANCE_CONFIG, "none"));
-assertEquals(ToleranceType.NONE, configuration.errorToleranceType());
+assertEquals(configuration.errorToleranceType(), ToleranceType.NONE);
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```diff
 assertEquals(configuration.dlqTopicName(), DLQ_TOPIC);

 configuration = config(singletonMap(SinkConnectorConfig.DLQ_TOPIC_REPLICATION_FACTOR_CONFIG, "7"));
-assertEquals(7, configuration.dlqTopicReplicationFactor());
+assertEquals(configuration.dlqTopicReplicationFactor(), 7);
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```diff
 ArgumentCaptor<NotFoundException> exceptionCaptor = ArgumentCaptor.forClass(NotFoundException.class);
 verify(patchCallback).onCompletion(exceptionCaptor.capture(), isNull());
-assertEquals("Connector " + CONNECTOR_NAME + " not found", exceptionCaptor.getValue().getMessage());
+assertEquals(exceptionCaptor.getValue().getMessage(), "Connector " + CONNECTOR_NAME + " not found");
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```java
    cause.getMessage(),
    "Connector configuration is invalid and contains the following 1 error(s):\n" +
        error + "\n" +
        "You can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```java
assertEquals(SSLUtils.getOrDefault(map, existingKey, defaultValue), value);
assertEquals(SSLUtils.getOrDefault(map, missingKey, defaultValue), defaultValue);
```
Incorrect Equals Order
The assertEquals parameters are reversed from standard order. In JUnit, the expected value should be the first parameter and the actual value second. This reversal makes test failure messages confusing.
Standards
- JUnit Best Practices - Parameter Order
```diff
+Plugin<ConfigProvider> providerPlugin = Plugin.wrapInstance(mockFileConfigProvider,
+        null,
+        WorkerConfig.CONFIG_PROVIDERS_CONFIG,
+        Map.of("provider", "file"));
 when(plugins.newConfigProvider(any(AbstractConfig.class),
-        eq("config.providers.file"),
-        any(ClassLoaderUsage.class)))
-        .thenReturn(mockFileConfigProvider);
+        eq("file"),
+        any(ClassLoaderUsage.class),
+        any(Metrics.class))).thenReturn(providerPlugin);
 }
```
Redundant Type Matcher
The any(Metrics.class) matcher is redundant, as it can be replaced with any() since the method signature already defines the type. This makes the code less maintainable if the method signature changes.
Standards
- Clean Code - Avoid Redundant Type Information
```java
return classLoader -> {
    return new LoaderSwap(Thread.currentThread().getContextClassLoader());
};
```
Unnecessarily Verbose Lambda
The lambda expression has been expanded with unnecessary braces and an explicit return statement. This makes the code more verbose without adding any value or clarity.
Standards
- Clean Code - Keep Code Concise
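The point generalizes beyond this one call site; a sketch of the three equivalent forms, from the verbose shape flagged above down to a method reference:

```java
import java.util.function.Function;

public class LambdaConciseness {
    // Verbose form flagged in the review: braces and an explicit return add no clarity.
    static final Function<String, Integer> VERBOSE = s -> {
        return s.length();
    };
    // Equivalent expression lambda: same behavior, less ceremony.
    static final Function<String, Integer> CONCISE = s -> s.length();
    // A method reference is often the clearest option of all.
    static final Function<String, Integer> REF = String::length;

    public static void main(String[] args) {
        System.out.println(VERBOSE.apply("swap")); // 4
        System.out.println(CONCISE.apply("swap")); // 4
        System.out.println(REF.apply("swap"));     // 4
    }
}
```

All three compile to the same observable behavior, so the choice is purely about readability.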
```diff
 private String executePut(URI serverUrl, String endpoint, String jsonBody) throws IOException {
     HttpPut request = new HttpPut(endpoint);
-    StringEntity entity = new StringEntity(jsonBody, StandardCharsets.UTF_8);
+    StringEntity entity = new StringEntity(jsonBody, StandardCharsets.UTF_8.name());
```
Potential Character Encoding Issue
The code is changed to use StandardCharsets.UTF_8.name() instead of passing the Charset object directly. This is less reliable as it introduces a string conversion that could potentially cause issues if the StringEntity constructor changes its behavior.
Standards
- Java Best Practices - Use Charset Objects Directly
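The principle behind this comment can be shown with the JDK's own APIs (HttpClient's `StringEntity` overloads follow the same pattern): a `Charset` object needs no lookup and cannot fail, while a charset *name* is just a string that must be resolved at runtime and can throw on a typo.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.charset.UnsupportedCharsetException;

public class CharsetVsNameDemo {
    public static void main(String[] args) {
        // Charset object: no lookup, no exception to handle.
        byte[] utf8 = "café".getBytes(StandardCharsets.UTF_8);

        // Name-based lookup works for valid names...
        String decoded = new String(utf8, Charset.forName("UTF-8"));
        System.out.println(decoded); // café

        // ...but a bad name only fails at runtime.
        try {
            Charset.forName("UTF-80");
        } catch (UnsupportedCharsetException e) {
            System.out.println("lookup failed: " + e.getCharsetName());
        }
    }
}
```

Passing `StandardCharsets.UTF_8` directly keeps the failure mode out of the program entirely, which is why the original `Charset`-taking overload was preferable.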
/refacto-bot

/refacto-bot

Refacto is reviewing this PR. Please wait for the review comments to be posted.

Code Review: Whitespace Changes

👍 Well Done
📌 Files Processed
📝 Additional Comments

No review comments were generated during this review iteration of the PR.
Summary by CodeRabbit
New Features
Refactor
Documentation
Tests